Equity is widely held to be fundamental to the ethics of healthcare. In the context of clinical decision-making, it rests on the comparative fidelity of the intelligence, evidence-based or intuitive, guiding the management of each individual patient. Though brought to recent attention by the individuating power of contemporary machine learning, such epistemic equity arises in the context of any decision guidance, whether traditional or innovative. Yet no general framework for its quantification, let alone its assurance, currently exists. Here we formulate epistemic equity in terms of model fidelity evaluated over learnt multidimensional representations of identity crafted to maximise the captured diversity of the population, introducing a comprehensive framework for Representational Ethical Model Calibration. We demonstrate the use of the framework on large-scale multimodal data from UK Biobank to derive diverse representations of the population, quantify model performance, and institute responsive remediation. We offer the method as a principled solution to quantifying and assuring epistemic equity in healthcare, with applications across research, clinical, and regulatory domains.
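The auditing step described here can be illustrated with a minimal sketch: cluster learnt representations into population strata, then compare model error across the discovered strata. The function name, the KMeans clustering choice, and the synthetic data are illustrative assumptions, not the paper's pipeline.

```python
# Minimal sketch of a subgroup fidelity audit: cluster learnt identity
# representations, then compare model error across the discovered strata.
# Names and the clustering choice (KMeans) are illustrative, not the paper's.
import numpy as np
from sklearn.cluster import KMeans

def fidelity_by_stratum(embeddings, y_true, y_pred, n_strata=10, seed=0):
    """Return mean absolute error per discovered population stratum."""
    strata = KMeans(n_clusters=n_strata, random_state=seed, n_init=10).fit_predict(embeddings)
    errors = np.abs(y_true - y_pred)
    return {s: errors[strata == s].mean() for s in range(n_strata)}

# Usage: flag strata whose error sits far above the population mean,
# marking them as candidates for remediation (e.g. reweighting, more data).
rng = np.random.default_rng(0)
emb = rng.normal(size=(1000, 16))   # stand-in for learnt representations
y = rng.normal(size=1000)
pred = y + rng.normal(scale=0.5, size=1000)
per_stratum = fidelity_by_stratum(emb, y, pred)
worst = max(per_stratum, key=per_stratum.get)
print(f"worst-served stratum: {worst} (MAE={per_stratum[worst]:.3f})")
```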
Background: The complex heterogeneity of brain tumours is increasingly recognized to demand fully inclusive, large-scale collections drawn from routine clinical care. This is a task contemporary machine learning could facilitate, especially in neuroimaging, but its ability to handle the incomplete data common in real-world clinical practice remains unknown. Here we apply state-of-the-art methods to large-scale, multi-site MRI data to quantify the comparative fidelity of automated tumour segmentation models under the various levels of data completeness observed in clinical reality. Methods: We compared deep learning (nnU-Net-derived) tumour segmentation models trained on all possible combinations of T1, contrast-enhanced T1, T2, and FLAIR imaging sequences, using the 2021 BraTS-RSNA glioma population of 1251 patients for training and a diverse sample of 50 patients for testing. Results: Models trained on incomplete data segmented lesions well, often equivalently to those trained on complete data, exhibiting Dice coefficients of 0.907 (single sequence) to 0.945 (complete dataset) for whole tumours, and 0.701 (single sequence) to 0.891 (complete dataset) for component tissue types. Models trained on incomplete data could accurately detect enhancing tumour in the absence of contrast imaging, quantifying its volume with R² between 0.95 and 0.97. Conclusions: Deep learning segmentation models characterize tumours well even with missing data, and can detect enhancing tissue without the use of contrast. This suggests that translation to clinical practice, where incomplete data is common, may be easier than hitherto believed, and that such models may have value in reducing dependence on contrast use.
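For readers unfamiliar with the headline metric, a minimal sketch of the Dice overlap used to score segmentations follows; it operates on binary masks and is not the authors' evaluation code.

```python
# Minimal sketch of the Dice coefficient used to score segmentations,
# computed here on binary masks with NumPy; not the authors' pipeline.
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks of equal shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# A Dice of 0.907 vs 0.945 (the single-sequence vs complete-dataset range
# above) measures voxel overlap, so small absolute gaps can still matter.
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True
print(f"Dice: {dice_coefficient(a, b):.3f}")  # 2*4/(4+6) = 0.8
```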
Designing experiments often requires balancing between learning about the true treatment effects and earning from allocating more samples to the superior treatment. While optimal algorithms for the Multi-Armed Bandit Problem (MABP) provide allocation policies that optimally balance learning and earning, they tend to be computationally expensive. The Gittins Index (GI) is a solution to the MABP that can simultaneously attain the optimality and computational efficiency goals, and it has recently been used in experiments with Bernoulli and Gaussian rewards. For the first time, we present a modification of the GI rule that can be used in experiments with exponentially distributed rewards. We report its performance in simulated 2-armed and 3-armed experiments. Compared to traditional non-adaptive designs, our novel GI-modified design shows operating characteristics comparable in learning (e.g. statistical power) but substantially better in earning (e.g. direct benefits). This illustrates the potential of designs that use a GI approach to allocate participants: they can improve participant benefits, increase efficiency, and reduce experimental costs in adaptive multi-armed experiments with exponential rewards.
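The overall shape of such a design can be sketched as an index policy with a conjugate Gamma posterior per arm. The index below is a placeholder (posterior mean reward plus an exploration bonus), standing in for the exponential-reward Gittins index the paper actually derives; all names and values are illustrative.

```python
# Skeleton of an index-based adaptive allocation for exponential rewards.
# Arm rewards ~ Exponential(rate); a Gamma(a, b) prior on each rate is
# conjugate: after n observations summing to s, the posterior is Gamma(a+n, b+s).
# placeholder_index is NOT the paper's Gittins index; it is a stand-in
# (posterior mean reward + exploration bonus) to show the allocation loop.
import numpy as np

rng = np.random.default_rng(1)
true_rates = [1.0, 2.0, 0.5]          # hypothetical arm parameters
a = np.ones(3); b = np.ones(3)        # Gamma(1, 1) priors on the rates
n = np.zeros(3)

def placeholder_index(a_i, b_i, n_i, t):
    mean_reward = b_i / (a_i - 1) if a_i > 1 else b_i  # E[1/rate] under Gamma
    return mean_reward + np.sqrt(2 * np.log(t + 1) / (n_i + 1))

for t in range(500):
    idx = [placeholder_index(a[k], b[k], n[k], t) for k in range(3)]
    arm = int(np.argmax(idx))                     # allocate next participant
    reward = rng.exponential(1.0 / true_rates[arm])
    a[arm] += 1; b[arm] += reward; n[arm] += 1    # conjugate posterior update

print("allocations per arm:", n)   # should favour the low-rate (high-mean) arm
```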
Language Models appear to perform poorly on quantification. We ask how badly. 'Few'-type quantifiers, as in 'few children like vegetables', might pose a particular challenge for Language Models, since the sentence components without the quantifier are likely to co-occur, and because 'few'-type quantifiers are rare. We present 960 sentence stimuli from two human neurolinguistic experiments to 22 autoregressive transformer models of differing sizes. Not only do the models perform poorly on 'few'-type quantifiers, but overall the larger the model, the worse its performance. We interpret this inverse scaling as suggesting that larger models increasingly reflect online rather than offline human processing, and argue that the decreasing performance of larger models may challenge uses of Language Models as the basis for Natural Language Systems.
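The core measurement behind such stimuli presentations can be sketched as follows: score a sentence by its summed token log-probability under an autoregressive model. The small GPT-2 checkpoint and the example sentences are illustrative choices, not the paper's 22 models or its stimuli.

```python
# Minimal sketch of scoring a quantified sentence with an autoregressive LM:
# sum the per-token log-probabilities under a small GPT-2 (illustrative model
# choice; the paper evaluates 22 models of differing sizes).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def sentence_logprob(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # log P(token_i | tokens_<i): align logits at position i-1 with target i
    logp = torch.log_softmax(logits[0, :-1], dim=-1)
    return logp.gather(1, ids[0, 1:, None]).sum().item()

# A 'few'-type quantifier flips the expected continuation; compare:
print(sentence_logprob("Most children like vegetables."))
print(sentence_logprob("Few children like vegetables."))
```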
Are the predictions of humans and language models affected by similar things? Research suggests that while comprehending language, humans make predictions about upcoming words, with more predictable words being processed more easily. However, evidence also shows that humans display a similar processing advantage for highly anomalous words when these words are semantically related to the preceding context or to the most probable continuation. Using stimuli from 3 psycholinguistic experiments, we find that this is almost always also the case for 8 contemporary transformer language models (BERT, ALBERT, RoBERTa, XLM-R, GPT-2, GPT-Neo, GPT-J, and XGLM). We then discuss the implications of this phenomenon for our understanding of both human language comprehension and the predictions made by language models.
Gauge theory plays a crucial role in many areas of science, including high energy physics, condensed matter physics, and quantum information science. In quantum simulations of lattice gauge theory, an important step is to construct a wave function that obeys gauge symmetry. In this paper, we have developed gauge equivariant neural network wave function techniques for simulating continuous-variable quantum lattice gauge theories in the Hamiltonian formulation. We have applied the gauge equivariant neural network approach to find the ground state of 2+1-dimensional lattice gauge theory with U(1) gauge group using variational Monte Carlo. We have benchmarked our approach against state-of-the-art complex Gaussian wave functions, demonstrating improved performance in the strong coupling regime and comparable results in the weak coupling regime.
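The variational Monte Carlo machinery used here can be illustrated on a textbook toy problem: a Gaussian trial wavefunction for the 1D harmonic oscillator, sampled with Metropolis updates. This is a generic VMC sketch only; the paper's ansatz is a gauge equivariant neural network, not this Gaussian.

```python
# Toy variational Monte Carlo: estimate <H> for trial psi(x) = exp(-alpha x^2)
# on the 1D harmonic oscillator H = -0.5 d2/dx2 + 0.5 x^2. Generic VMC only;
# the paper's ansatz is a gauge equivariant neural network, not this Gaussian.
import numpy as np

def local_energy(x, alpha):
    # E_L = -0.5 psi''/psi + 0.5 x^2 = alpha + x^2 (0.5 - 2 alpha^2)
    return alpha + x**2 * (0.5 - 2.0 * alpha**2)

def vmc_energy(alpha, n_steps=50_000, step=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x, energies = 0.0, []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        # Metropolis: accept with probability |psi(x_new)|^2 / |psi(x)|^2
        if rng.random() < np.exp(-2.0 * alpha * (x_new**2 - x**2)):
            x = x_new
        energies.append(local_energy(x, alpha))
    return np.mean(energies[n_steps // 10:])   # discard burn-in

print(vmc_energy(0.5))   # exact minimum: E = 0.5 at alpha = 0.5
print(vmc_energy(0.3))   # any other alpha gives E > 0.5
```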
We study word- and sentence-level prediction tasks probing the extent to which the verb alternation classes described by Levin (1993) are encoded by language models. We follow and extend the experiments of Kann et al. (2019), which aimed to probe whether static embeddings encode verbs' frame selectivity. At both the word and sentence levels, we find that contextual embeddings from PLMs not only outperform non-contextual embeddings but achieve surprisingly high accuracy on tasks across most alternation classes. Additionally, we find evidence that the middle layers of PLMs achieve, on average, better performance than the lower layers across all probing tasks.
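A minimal sketch of the layer-wise probing setup follows: extract every hidden layer of a PLM for a set of sentences, then fit a linear probe per layer. The BERT checkpoint, the toy sentences, and the binary labels are illustrative assumptions, not the paper's Levin-class tasks.

```python
# Minimal sketch of layer-wise probing: extract each hidden layer of a PLM
# for a set of sentences, then fit a linear probe per layer. Illustrative
# model and labels only; the paper's tasks are alternation-class predictions.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()

sentences = ["The ice melted.", "The sun melted the ice."]  # toy examples
labels = [0, 1]                                             # hypothetical classes

with torch.no_grad():
    enc = tok(sentences, return_tensors="pt", padding=True)
    hidden = model(**enc, output_hidden_states=True).hidden_states  # 13 layers

for layer, h in enumerate(hidden):
    feats = h.mean(dim=1).numpy()       # mean-pooled sentence embedding
    probe = LogisticRegression(max_iter=1000).fit(feats, labels)
    print(f"layer {layer:2d} train acc: {probe.score(feats, labels):.2f}")
```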
Some languages allow arguments to be omitted in certain contexts. Yet human language comprehenders reliably infer the intended referents of these zero pronouns, in part because they construct expectations about which referents are more likely. We ask whether neural language models also extract the same expectations. We test whether 12 contemporary language models display expectations that reflect human behaviour when exposed to sentences with zero pronouns from five behavioural experiments conducted in Italian by Carminati (2005). We find that three models, XGLM 2.9B, 4.5B, and 7.5B, capture human behaviour from all the experiments, while the others successfully model some of the results. This result suggests that human expectations about coreference can be derived from exposure to language, and also points to the features of language models that allow them to better reflect human behaviour.
An inherent problem in reinforcement learning is coping with policies that are uncertain about what action to take (or about a state's value). Model uncertainty, more formally known as epistemic uncertainty, refers to the expected prediction error of a model beyond the sampling noise. In this paper, we propose a metric for epistemic uncertainty estimation in Q-value functions, which we term pathwise epistemic uncertainty. We further develop a method to compute its approximate upper bound, which we call the F-value. We experimentally apply the latter to Deep Q-Networks (DQN) and show that uncertainty estimation in reinforcement learning serves as a useful indicator of learning progress. We then propose a new approach for improving the sample efficiency of actor-critic algorithms: learning from an existing (previously learnt or hard-coded) oracle policy while being uncertainty-aware, so as to avoid unproductive random actions during training. We term this Critic Confidence Guided Exploration (CCGE). We implement CCGE on top of Soft Actor-Critic (SAC) using our F-value metric, apply it to a handful of popular Gym environments, and show that it achieves better sample efficiency and total episodic reward than vanilla SAC in limited contexts.
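The uncertainty-aware exploration idea can be sketched with a Q-ensemble whose disagreement serves as an epistemic-uncertainty proxy (a stand-in for the paper's F-value), deferring to the oracle whenever the critic is unsure. All names and thresholds below are hypothetical, not the authors' code.

```python
# Illustrative sketch of uncertainty-guided exploration: use the disagreement
# of a small Q-ensemble as an epistemic-uncertainty proxy (a stand-in for the
# paper's F-value) and defer to an oracle policy when the critic is unsure.
import numpy as np

rng = np.random.default_rng(0)

def q_ensemble(state, action, n_heads=5):
    """Stand-in for an ensemble of learnt critics; returns one Q per head."""
    return rng.normal(loc=state + action, scale=0.3, size=n_heads)

def select_action(state, policy_action, oracle_action, threshold=0.25):
    qs = q_ensemble(state, policy_action)
    uncertainty = qs.std()              # ensemble disagreement as proxy
    if uncertainty > threshold:
        return oracle_action, "oracle"  # critic unsure: follow the oracle
    return policy_action, "policy"      # critic confident: trust the actor

action, source = select_action(state=0.5, policy_action=0.1, oracle_action=0.4)
print(source, action)
```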
Abstract reasoning is a key ability for an intelligent system. Large language models achieve a high degree of performance on abstract reasoning tasks, but exhibit many imperfections. However, human abstract reasoning is also imperfect, and depends on our knowledge of and beliefs about the content of the reasoning problem. For example, humans reason far more reliably about logical rules that are grounded in everyday situations than about arbitrary rules concerning abstract attributes. The training experiences of language models similarly endow them with prior expectations that reflect human knowledge and beliefs. We therefore hypothesized that language models would show human-like content effects on abstract reasoning problems. We explored this hypothesis across three logical reasoning tasks: natural language inference, judging the logical validity of syllogisms, and the Wason selection task (Wason, 1968). We find that state-of-the-art large language models (with 7 or 70 billion parameters; Hoffmann et al., 2022) reflect many of the same patterns observed in humans across these tasks: like humans, models reason more effectively about believable situations than about unrealistic or abstract ones. Our findings have implications for understanding both these cognitive effects and the factors that contribute to language model performance.